[RUM-12600] Continuous Benchmarking #3927
Bundles Sizes Evolution
🚀 CPU Performance
🧠 Memory Performance
test.describe('benchmark', () => {
  createBenchmarkTest('heavy').run(async (page, takeMeasurements) => {
💭 thought: I think there are more idiomatic ways to declare common test logic in Playwright using fixtures. But personally I don't mind your approach, as I find it more explicit than relying on Playwright magic.
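For illustration, the fixture-based alternative the reviewer alludes to might look roughly like this. This is a sketch only: the fixture name `takeMeasurements`, the CDP-based sampling, and the sample-app URL are assumptions, not the PR's actual implementation.

```typescript
// Sketch of a Playwright fixture replacing the explicit createBenchmarkTest helper.
// Chromium-only: CDP sessions are not available in Firefox/WebKit.
import { test as base } from '@playwright/test';

const test = base.extend<{ takeMeasurements: () => Promise<void> }>({
  takeMeasurements: async ({ page }, use) => {
    const samples: number[] = [];
    await use(async () => {
      // Collect one sample (placeholder metric: JS heap size via CDP).
      const session = await page.context().newCDPSession(page);
      await session.send('Performance.enable');
      const { metrics } = await session.send('Performance.getMetrics');
      samples.push(metrics.find((m) => m.name === 'JSHeapUsedSize')?.value ?? 0);
    });
    // Teardown runs after the test body: report what was collected.
    console.log(`collected ${samples.length} samples`);
  },
});

test('benchmark: heavy', async ({ page, takeMeasurements }) => {
  await page.goto('https://example.com'); // hypothetical sample-app URL
  await takeMeasurements();
});
```

The trade-off is the one the reviewer names: the fixture version wires `takeMeasurements` in implicitly through `test.extend`, whereas the PR's helper makes the setup visible at the call site.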
Motivation
We want to monitor the impact of performance-intensive features like Replay and Profiling.
This PR introduces benchmark tests to measure their effect.
Measurements
Same as /performances, except it only collects the total consumption.
Scenarios
Currently, we run a single scenario, heavy.scenario.ts, which executes a sample app from this PR. The scenario doesn’t yet represent a realistic heavy website and will be refined in a follow-up PR.
This scenario runs under four different configurations.
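The four configurations are not enumerated in this capture. One plausible reading, given the Motivation's focus on Replay and Profiling, is a 2×2 matrix over those two features; the sketch below assumes exactly that, and every name in it is hypothetical.

```typescript
// Hypothetical configuration matrix: the PR's actual four configurations are
// not listed here, so this assumes a 2x2 grid over Replay and Profiling.
interface BenchmarkConfiguration {
  name: string;
  sessionReplay: boolean;
  profiling: boolean;
}

function buildConfigurations(): BenchmarkConfiguration[] {
  const configs: BenchmarkConfiguration[] = [];
  for (const sessionReplay of [false, true]) {
    for (const profiling of [false, true]) {
      // Derive a label like 'base', 'replay', 'profiling', or 'replay+profiling'.
      const name =
        [sessionReplay && 'replay', profiling && 'profiling']
          .filter(Boolean)
          .join('+') || 'base';
      configs.push({ name, sessionReplay, profiling });
    }
  }
  return configs;
}
```

Running the single heavy scenario once per entry of such a matrix would yield the four benchmark runs the description mentions.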
Scheduling
A new performance-benchmark GitLab job runs every 30 minutes. Each run executes the tests 15 times.
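As a rough sketch, the scheduled job could be wired up like this in .gitlab-ci.yml. Only the job name comes from the description; the stage, the schedule rule, and the script line are assumptions (--repeat-each is Playwright's built-in flag for running each test multiple times).

```yaml
# Sketch only: stage, rule, and script are assumptions; the job name is from the PR.
performance-benchmark:
  stage: test
  rules:
    # Run only from the pipeline schedule (configured to fire every 30 minutes).
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    # Playwright's --repeat-each flag executes each test 15 times per run.
    - yarn playwright test benchmark --repeat-each 15
```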
Visualization
Local: displays a summary table in the console

Datadog dashboard

Changes
Benchmark tests are in /benchmark. I haven’t replaced /performances yet, but if we agree on this approach, I’ll remove it in a follow-up PR.
Test instructions
Checklist